Guild icon
Project Sekai
🔒 UMDCTF 2023 / ❌-ml-alakazams-password
Avatar
Alakazam's Password - 500 points
Category: ML
Description: These Pokémon keep losing their passwords. This time it's Alakazam. Before he lost it, it was saved in some de facto weird format he's never seen before. Maybe he needs a bigger brain. Can you recover his password?
Author: Segal
https://drive.google.com/file/d/14qTfcZfom6zvoJTMz_KQqL9ZPepqVoF3/view
Files: No files.
Tags: No tags.
Sutx pinned a message to this channel. 04/28/2023 3:01 PM
Avatar
@Violin wants to collaborate 🤝
Avatar
@fleming wants to collaborate 🤝
Avatar
Please redownload the file. Was the wrong file, is now fixed.
Avatar
@deuterium wants to collaborate 🤝
Avatar
Feels like loading and figuring out the model is half the work here
Avatar
@unpickled admin bot wants to collaborate 🤝
Avatar
Trying to figure out the ckpt
Avatar
Avatar
deuterium
Trying to figure out the ckpt
unpickled admin bot 04/30/2023 2:57 AM
those are the weights
02:58
not sure if it includes the biases
02:58
just remember it has weights from when i did stuff with the stable diffusion ckpt
02:58
A CKPT file is a checkpoint file created by PyTorch Lightning, a PyTorch research framework. It contains a dump of a PyTorch Lightning machine learning model. Developers create CKPT files to preserve the previous states of a machine learning model, while training it to its final state. This indicates it will have both biases and weights.
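For reference, a minimal sketch (not the challenge's code; the toy model and filename are made up) of how such a .ckpt is typically written and read back with PyTorch. Since the checkpoint is just a pickled dict, it can bundle the full state_dict (weights and biases) plus optimizer state and the training step:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)                       # toy model with a weight and a bias
optimizer = torch.optim.Adam(model.parameters())

# a .ckpt written by torch.save is just a pickled dict, so it can hold anything
torch.save({
    "step": 1000,
    "model_state": model.state_dict(),        # contains both 'weight' and 'bias'
    "optimizer_state": optimizer.state_dict(),
}, "toy.ckpt")

ckpt = torch.load("toy.ckpt", map_location="cpu")
for name, tensor in ckpt["model_state"].items():
    print(name, tuple(tensor.shape))          # weight (2, 4), bias (2,)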
Avatar
Yeah but no indication of model though

ckpt = tf.train.load_checkpoint("step-000029999.ckpt")

Could not open step-000029999.ckpt: DATA_LOSS: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
Traceback (most recent call last):
  File "/home/deuterium/.local/lib/python3.10/site-packages/tensorflow/python/training/py_checkpoint_reader.py", line 92, in NewCheckpointReader
    return CheckpointReader(compat.as_bytes(filepattern))
RuntimeError: Unable to open table file step-000029999.ckpt: DATA_LOSS: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
Avatar
unpickled admin bot 04/30/2023 3:00 AM
its a full save of the model
03:00
so
03:00
it contains everything
Avatar
and loading it via

model = tf.keras.models.load_model("step-000029999.ckpt")

    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 106, in h5py.h5f.open
OSError: Unable to open file (file signature not found)
Avatar
unpickled admin bot 04/30/2023 3:00 AM
presumably corrupted?
Avatar
could be
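A quick check that would distinguish "corrupted" from "just a different format" (a sketch, not from the chat; it assumes the same filename): look at the file's magic bytes. A torch.save checkpoint (PyTorch 1.6+) is a zip archive and starts with PK, while the Keras loader above expects an HDF5 signature:

# check the magic bytes before guessing which loader to use
with open("step-000029999.ckpt", "rb") as f:
    magic = f.read(8)
print(magic)
# b'PK\x03\x04...'      -> zip archive, i.e. a torch.save()/PyTorch checkpoint
# b'\x89HDF\r\n\x1a\n'  -> HDF5, i.e. what the Keras .h5 loader above expects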
Avatar
unpickled admin bot 04/30/2023 3:00 AM
can you try loading in the stable diffusion ckpt
03:01
disclaimer
03:01
this is big
03:01
very big
03:01
2 gb i think
03:01
but its a valid ckpt
Avatar
looks like some kind of ray tracing 3d model?
Avatar
Avatar
unpickled admin bot
but its a valid ckpt
I have this ckpt, let me try
Avatar
unpickled admin bot 04/30/2023 3:02 AM
kk
03:02
ty
Avatar
getting same error, so probably there's some way; let me check stable diffusion webui model loading XD
Avatar
nerfstudio-project/nerfstudio on GitHub: A collaboration friendly studio for NeRFs.
Avatar
Avatar
deuterium
It's most probably this; installing it seems to be a pain
05:26
import torch
import torch.nn as nn

checkpoint = torch.load("./step-000029999.ckpt")

# printing stuff for chadgpt
step = checkpoint['step']
pipeline = checkpoint['pipeline']
optimizers = checkpoint['optimizers']
scalers = checkpoint['scalers']

# printing for chadgpt to figure out architecture
print("scalers:", scalers)
print("step:", step)

for i in pipeline:
    if pipeline[i].ndim == 0:
        print(i, pipeline[i])
    else:
        print(i, pipeline[i].size())

for opt in optimizers:
    print("optimizer:", opt)
    print(optimizers[opt]["param_groups"])
    for opt_state_num, opt_state in optimizers[opt]["state"].items():
        print(opt_state_num, "step", opt_state["step"])
        print(opt_state_num, "exp_avg", opt_state["exp_avg"].size())
        print(opt_state_num, "exp_avg_sq", opt_state["exp_avg_sq"].size())
05:27
It comes up with this architecture:

class Field(nn.Module):
    def __init__(self):
        super(Field, self).__init__()
        self.aabb = torch.Tensor(2, 3)
        self.embedding_appearance = nn.Linear(224, 32)
        # Direction encoding and position encoding are not specified in the output
        self.mlp_base = ...  # Define MLP base with 12,199,312 parameters
        self.mlp_head = ...  # Define MLP head with 9,216 parameters

class ProposalNetwork(nn.Module):
    def __init__(self):
        super(ProposalNetwork, self).__init__()
        self.aabb = torch.Tensor(2, 3)
        self.mlp_base = ...  # Define MLP base with 767,040 parameters

class CameraOptimizer(nn.Module):
    def __init__(self):
        super(CameraOptimizer, self).__init__()
        self.pose_adjustment = nn.Parameter(torch.Tensor(224, 6))

class RayGenerator(nn.Module):
    def __init__(self):
        super(RayGenerator, self).__init__()
        self.pose_adjustment = nn.Parameter(torch.Tensor(224, 6))

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.field = Field()
        self.proposal_networks = nn.ModuleList([ProposalNetwork(), ProposalNetwork()])
        self.train_camera_optimizer = CameraOptimizer()
        self.train_ray_generator = RayGenerator()
        self.eval_camera_optimizer = CameraOptimizer()
        self.eval_ray_generator = RayGenerator()

    def forward(self, x):
        pass  # Define the forward pass based on the model components
05:27
which is wrong
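One way to sanity-check a guessed architecture (a sketch, not from the chat; it assumes the keys in checkpoint['pipeline'] are dotted module paths) is to group the stored tensors by key prefix and count parameters per submodule, then compare against the guess:

import collections
import torch

checkpoint = torch.load("./step-000029999.ckpt", map_location="cpu")
counts = collections.Counter()
for name, value in checkpoint["pipeline"].items():
    prefix = ".".join(name.split(".")[:2])    # assumed: keys are dotted module paths
    counts[prefix] += value.numel() if torch.is_tensor(value) else 1
for prefix, n in counts.most_common():
    print(f"{prefix}: {n} parameters")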
Avatar
ill join in 3hrs
Avatar
@deuterium you can open a ticket and sanity check model loading
06:16
author will tell you about whether ur approach is right
06:16
if you still have issue loading model
Avatar
try to get it loaded so i can start when im back
Avatar
@kanon wants to collaborate 🤝
Avatar
anything?
08:00
will be back in a bit
08:00
@deuterium did we even load the checkpoint now
Avatar
there is a hint on train_ray_generator
Avatar
Avatar
sahuang
@deuterium did we even load the checkpoint now
yeah check the code above
08:30
the problem is to know the architecture
Avatar
yeah idk this model
09:08
very ML specific
Avatar
Avatar
deuterium
most probably this
Avatar
any output given? whats the goal?
Avatar
just a ckpt is given
09:10
but how do we recover password without data
09:10
password=input maybe
Avatar
possible
Avatar
yeah no output tho, confused
Avatar
@hfz wants to collaborate 🤝
09:26
the model loaded just fine
09:27
looks like an intermediate model state, usually done to keep the best model before overfitting (early stopping)
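For context, a toy sketch (not the challenge's training code; the model, data, and save interval are made up) of the checkpointing pattern that produces filenames like step-000029999.ckpt: dump an intermediate state every N steps so a good pre-overfitting model can be restored later:

import torch
import torch.nn as nn

# toy model/data, made up for illustration only
model = nn.Linear(4, 1)
optimizer = torch.optim.Adam(model.parameters())
x, y = torch.randn(64, 4), torch.randn(64, 1)

for step in range(30000):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
    if (step + 1) % 10000 == 0:               # dump an intermediate state every N steps
        torch.save({"step": step, "pipeline": model.state_dict()},
                   f"step-{step:09d}.ckpt")   # e.g. step-000029999.ckpt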
Avatar
@layka_ wants to collaborate 🤝
Avatar
@chenx3n wants to collaborate 🤝
Avatar
Avatar
hfz
looks like an intermediate model state, usually done to keep the best model before overfitting (early stopping)
unpickled admin bot 04/30/2023 1:33 PM
if you got it to load can you send everything: the weights, gradients, and the model (as in nn.Linear/activation functions) (edited)
Avatar
i think it's there
Avatar
Avatar
deuterium
(quotes the checkpoint-inspection script above)
run this i assume
Avatar
Avatar
deuterium
(quotes the guessed architecture above)
will get this
Avatar
unpickled admin bot 04/30/2023 1:44 PM
oh tyty
Avatar
Avatar
sahuang
will get this
no no, this is just a dummy model
Avatar
not sure, i havent run it cuz busy with other 2 chals; now that they were solved im back for final grind on this
13:44
oh ok
Avatar
Avatar
deuterium
(quotes the guessed architecture above)
unpickled admin bot 04/30/2023 1:44 PM
only one gate?
Avatar
Avatar
unpickled admin bot
only one gate?
ignore this one
Avatar
unpickled admin bot 04/30/2023 1:44 PM
oh
Avatar
ok so running the script
13:47
checkpoint = torch.load("./step-000029999.ckpt", map_location=torch.device('cpu'))

You need this for CPU device (like my wsl)
Avatar
Avatar
sahuang
checkpoint = torch.load("./step-000029999.ckpt", map_location=torch.device('cpu')) You need this for CPU device (like my wsl)
if you dont have cuda setup
Avatar
yeah
13:48
i didnt
13:48
but its ok
Avatar
unpickled admin bot 04/30/2023 1:48 PM
i never set up cuda
13:48
cuz
13:48
lazy!
13:48
(never needed to)
Avatar
i set up 3+times before but all deleted later cuz no disk space left
Avatar
It's some sort of 3D view generation, so i guess it would be from text to 3d view
13:49
you mean the model data could construct some 3d flag view?
Avatar
possible
13:52
Holy crap, I have no clue how all this stuff works. Learning to reverse ML models seems like a fruitful venture in 5 years or so
Avatar
ok im asking admin
13:55
if that render is the way
13:58
Segal — Today at 1:57 PM
do you know what the model actually is?
Avatar
Nerf?
13:58
right
13:58
Segal — Today at 1:58 PM
so have you looked into what nerfs actually do
Avatar
no XD
Avatar
sahuang — Today at 1:59 PM
nope not yet
we need to read and understand how nerf works?
Segal — Today at 1:59 PM
not too much
just understand what they do and that should answer your question
13:59
ok
Avatar
3d scene from partial 2d views (edited)
Avatar
Segal — Today at 2:01 PM
im gonna be nice because i dont want you to go down a rabbithole
do not implement the rendering yourself
look at nerfstudio
Avatar
Avatar
deuterium
yup
Avatar
manually writing a nerf renderer as a challenge would not be feasible 💀
Avatar
pain to fucking install
14:02
can you try the google colab setup?
14:05
it has one right?
Avatar
yeah
Avatar
so you just import that file into the studio? or use the data to reconstruct that model at that state
14:09
kinda not sure what to do
Avatar
Avatar
sahuang
so you just import that file into the studio? or use the data to reconstruct that model at that state
no clue how to do that
14:12
https://datagen.tech/guides/synthetic-data/neural-radiance-field-nerf/
still no clue how the password comes into the picture
If we load a model, this colab asks us to upload images for various 2d views
Understand how Neural Radiance Field (NeRF) works and how you can use it to generate novel images from a 3D scene, based on a partial set of images.
14:15
but holy af thats new af tech, NVIDIA's blog is 25 march. The paper is still in preprint
Avatar
lmao
Avatar
okay no my bad 2020
14:24
input: 3d location and 2d viewing angle; output: color rgb and volume density
14:25
argh
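To make that last point concrete, a minimal sketch (not the challenge's architecture; positional encoding, proposal networks, and ray marching are omitted) of the core mapping a NeRF field learns: a 3d position plus a 2d viewing angle go through an MLP that outputs rgb color and volume density, which a renderer then integrates along camera rays:

import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),   # 3d position + 2d viewing angle
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),              # rgb + density
        )

    def forward(self, xyz, view_dir):
        out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])      # color in [0, 1]
        density = torch.relu(out[..., 3:])     # non-negative volume density
        return rgb, density

field = TinyNeRF()
rgb, density = field(torch.randn(8, 3), torch.randn(8, 2))  # 8 sample points on rays
print(rgb.shape, density.shape)   # torch.Size([8, 3]) torch.Size([8, 1])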
Exported 121 message(s)